Combination of Accumulated Motion and Color Segmentation for Human Activity Analysis
The automated analysis of activity in digital multimedia, and especially video, is gaining more and more importance due to the evolution of higher-level video processing systems and the development of relevant applications such as surveillance and sports. This paper presents a novel algorithm for the recognition and classification of human activities, which employs motion and color characteristics in a complementary manner, so as to extract the most information from both sources and overcome their individual limitations. The proposed method accumulates the flow estimates in a video, and extracts "regions of activity" by processing their higher-order statistics. The shape of these activity areas can be used for the classification of the human activities and events taking place in a video and the subsequent extraction of higher-level semantics. Color segmentation of the active and static areas of each video frame is performed to complement this information. The color layers in the activity and background areas are compared using the earth mover's distance, in order to achieve accurate object segmentation. Thus, unlike much existing work on human activity analysis, the proposed approach is based on general color and motion processing methods, and not on specific models of the human body and its kinematics. The combined use of color and motion information increases the method's robustness to illumination variations and measurement noise. Consequently, the proposed approach can lead to higher-level information about human activities, and its applicability is not limited to specific human actions. We present experiments with various real video sequences, from the sports and surveillance domains, to demonstrate the effectiveness of our approach.
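The abstract above compares color layers of active and background regions with the earth mover's distance. As a minimal illustration (not the authors' implementation), the 1-D case over, say, hue histograms can be computed directly, since for 1-D distributions with unit bin spacing the EMD equals the L1 distance between the cumulative distribution functions. The histogram values below are toy data.

```python
import numpy as np

def emd_1d(p, q):
    """1-D earth mover's distance between two histograms.
    For 1-D distributions with unit bin spacing this equals the
    L1 distance between the cumulative distribution functions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()  # normalize to probability distributions
    q = q / q.sum()
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

# Toy hue histograms of an "active" region and the background.
active_hist = np.array([0.0, 0.2, 0.6, 0.2, 0.0])
background_hist = np.array([0.6, 0.2, 0.2, 0.0, 0.0])
d = emd_1d(active_hist, background_hist)
```

A large distance, as here, suggests the active region's color layers differ from the background's, which is the cue the paper uses to refine object segmentation. A full EMD over 2-D or 3-D color histograms requires solving a transportation problem rather than this closed form.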
DnS: Distill-and-Select for Efficient and Accurate Video Indexing and Retrieval
In this paper, we address the problem of high performance and computationally
efficient content-based video retrieval in large-scale datasets. Current
methods typically propose either: (i) fine-grained approaches employing
spatio-temporal representations and similarity calculations, achieving high
performance at a high computational cost or (ii) coarse-grained approaches
representing/indexing videos as global vectors, where the spatio-temporal
structure is lost, providing low performance but also having low computational
cost. In this work, we propose a Knowledge Distillation framework, which we
call Distill-and-Select (DnS), that, starting from a well-performing
fine-grained Teacher Network, learns: a) Student Networks at different retrieval
performance and computational efficiency trade-offs and b) a Selection Network
that at test time rapidly directs samples to the appropriate student to
maintain both high retrieval performance and high computational efficiency. We
train several students with different architectures and arrive at different
trade-offs of performance and efficiency, i.e., speed and storage requirements,
including fine-grained students that index videos using binary
representations. Importantly, the proposed scheme allows Knowledge Distillation
on large, unlabelled datasets, which yields stronger students. We evaluate DnS
on five public datasets on three different video retrieval tasks and
demonstrate a) that our students achieve state-of-the-art performance in
several cases and b) that our DnS framework provides an excellent trade-off
between retrieval performance, computational speed, and storage space. In
specific configurations, our method achieves mAP similar to the teacher's but
is 20 times faster and requires 240 times less storage space. Our collected
dataset and implementation are publicly available:
https://github.com/mever-team/distill-and-select
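The core idea of the abstract above is that a cheap selector decides, per query, whether the fast coarse-grained student suffices or the expensive fine-grained student is needed. A rough numpy sketch of that routing pattern follows; the index, the "close top-2 scores" selector rule, and the re-ranking step are all stand-in assumptions, not the actual DnS networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy video index: one global (coarse-grained) vector per video.
index = rng.standard_normal((100, 16))
index /= np.linalg.norm(index, axis=1, keepdims=True)

def coarse_student(query):
    """Cheap path: one dot product per indexed video."""
    return index @ query

def fine_student(query):
    """Stand-in for an expensive fine-grained model; here it merely
    boosts the coarse top-10 (the real students are learned networks)."""
    scores = coarse_student(query)
    top = np.argsort(scores)[-10:]
    refined = scores.copy()
    refined[top] += 0.01  # placeholder for spatio-temporal re-scoring
    return refined

def selector(query, margin=0.05):
    """Hypothetical selection rule: if the coarse top-2 scores are
    close, the query is 'hard' and is routed to the fine student."""
    s = np.sort(coarse_student(query))
    return (s[-1] - s[-2]) < margin

def retrieve(query):
    return fine_student(query) if selector(query) else coarse_student(query)
```

The efficiency gain comes from the fine student running only on the fraction of queries the selector flags, while everything else is answered at coarse-grained cost.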
InDistill: Information flow-preserving knowledge distillation for model compression
In this paper we introduce InDistill, a model compression approach that
combines knowledge distillation and channel pruning in a unified framework for
the transfer of the critical information flow paths from a heavyweight teacher
to a lightweight student. Such information is typically collapsed in previous
methods due to an encoding stage prior to distillation. By contrast, InDistill
leverages a pruning operation applied to the teacher's intermediate layers,
reducing their width to that of the corresponding student layers. In this way,
we force architectural alignment, enabling the intermediate layers to be
directly distilled without the need for an encoding stage. Additionally, a
curriculum learning-based training scheme is adopted considering the
distillation difficulty of each layer and the critical learning periods in
which the information flow paths are created. The proposed method surpasses
state-of-the-art performance on three standard benchmarks, i.e., CIFAR-10,
CUB-200, and FashionMNIST, by 3.08%, 14.27%, and 1% mAP, respectively, as well
as on more challenging evaluation settings, i.e., ImageNet and CIFAR-100, by
1.97% and 5.65% mAP, respectively.
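The abstract above hinges on pruning the teacher's intermediate layers down to the student's width so feature maps can be matched directly. The numpy sketch below illustrates that alignment on toy feature tensors, using L1-norm channel selection as a common pruning criterion; the paper's exact pruning procedure and loss may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy intermediate feature maps: (channels, height, width).
teacher_feat = rng.standard_normal((32, 8, 8))  # heavyweight teacher layer
student_feat = rng.standard_normal((8, 8, 8))   # lightweight student layer

def prune_to_width(feat, width):
    """Keep the `width` channels with the largest L1 norm
    (an assumed criterion, not necessarily the paper's)."""
    norms = np.abs(feat).reshape(feat.shape[0], -1).sum(axis=1)
    keep = np.sort(np.argsort(norms)[-width:])
    return feat[keep]

# Prune the teacher to the student's width, then distill the layer
# directly with an MSE loss, with no intermediate encoding stage.
pruned = prune_to_width(teacher_feat, student_feat.shape[0])
distill_loss = float(np.mean((pruned - student_feat) ** 2))
```

Because the pruned teacher layer and the student layer now share a shape, the intermediate loss can be applied per layer, which is what lets the method weight layers by distillation difficulty in its curriculum scheme.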